MazeMate: An LLM-Powered Chatbot to Support Computational Thinking in Gamified Programming Learning

Hou, Chenyu, Yu, Hua, Zhu, Gaoxia, Anas, John Derek, Liu, Jiao, Ong, Yew Soon

arXiv.org Artificial Intelligence

Computational Thinking (CT) is a foundational problem-solving skill, and gamified programming environments are a widely adopted approach to cultivating it. While large language models (LLMs) provide on-demand programming support, current applications rarely foster CT development. We present MazeMate, an LLM-powered chatbot embedded in a 3D Maze programming game, designed to deliver adaptive, context-sensitive scaffolds aligned with CT processes in maze solving and maze design. We report on the first classroom implementation with 247 undergraduates. Students rated MazeMate as moderately helpful, with higher perceived usefulness for maze solving than for maze design. Thematic analysis confirmed support for CT processes such as decomposition, abstraction, and algorithmic thinking, while also revealing limitations in supporting maze design, including mismatched suggestions and fabricated algorithmic solutions. These findings demonstrate the potential of LLM-based scaffolding to support CT and underscore directions for design refinement to enhance MazeMate usability in authentic classrooms.



Empowering Children to Create AI-Enabled Augmented Reality Experiences

Zhang, Lei, Zhou, Shuyao, Liaqat, Amna, Mak, Tinney, Berengard, Brian, Qian, Emily, Monroy-Hernández, Andrés

arXiv.org Artificial Intelligence

Despite their potential to enhance children's learning experiences, AI-enabled AR technologies are predominantly used in ways that position children as consumers rather than creators. We introduce Capybara, an AR-based and AI-powered visual programming environment that empowers children to create, customize, and program 3D characters overlaid onto the physical world. Capybara enables children to create virtual characters and accessories using text-to-3D generative AI models, and to animate these characters through auto-rigging and body tracking. In addition, our system employs vision-based AI models to recognize physical objects, allowing children to program interactive behaviors between virtual characters and their physical surroundings. We demonstrate the expressiveness of Capybara through a set of novel AR experiences. We conducted user studies with 20 children in the United States and Argentina. Our findings suggest that Capybara can empower children to harness AI in authoring personalized and engaging AR experiences that seamlessly bridge the virtual and physical worlds.


TiniScript: A Simplified Language for Educational Robotics

Ramos, Gabriel Gonzalo Guzman, Ramos, Pedro Jesus Guzman

arXiv.org Artificial Intelligence

Constructionism, the learning theory formulated by Seymour Papert, has been a transformative approach in education, particularly within STEM (Science, Technology, Engineering, and Mathematics) fields. This theory emphasizes learning through creation, where students engage actively by building knowledge structures through hands-on tasks and meaningful projects. One of the early milestones influenced by constructionism was the development of the Logo programming language. Logo's simple command structure enabled students to grasp fundamental programming concepts visually, most famously through turtle graphics, establishing a foundation for educational tools that remain essential in early computer science education. Over time, educational robotics kits, like those from LEGO Education (RCX, NXT, and EV3), have set standards for integrating physical construction with software programming. These kits demonstrate the potential of robotics in educational settings by engaging students in both mechanical assembly and logical problem-solving, thereby fostering an understanding of hardware and software as interconnected aspects of robotics. Building on this foundation, programming environments in educational robotics have largely adopted block-based interfaces. These environments simplify coding for beginners, allowing students to create programs by connecting blocks representing specific actions. Once completed, the program is uploaded to a microcontroller, enabling the robot to execute the instructions.


Autocorrect Is Not: People Are Multilingual and Computer Science Should Be Too

Communications of the ACM

Computer science has a language problem--and we are not alluding to programming languages. Many prevalent, flawed views about natural human language are limiting who is in computer science and what people can accomplish with the technology we build. To start, computer science centers on the English language, and that produces technologies that work poorly for many people. As Manuel Pérez-Quiñones points out, when developers make assumptions about English as the default language, navigating digital device interfaces can be frustrating, even for a professional computer scientist fluent in English such as Pérez-Quiñones. Poor multilingual or character-encoding support, incorrect cultural norms baked into software, and so on--these challenges confront users all over the world.




Synthesizing a Progression of Subtasks for Block-Based Visual Programming Tasks

Tercan, Alperen, Ghosh, Ahana, Eniser, Hasan Ferit, Christakis, Maria, Singla, Adish

arXiv.org Artificial Intelligence

Block-based visual programming environments play an increasingly important role in introducing computing concepts to K-12 students. In recent years, they have also gained popularity in neuro-symbolic AI, serving as a benchmark to evaluate general problem-solving and logical reasoning skills. The open-ended and conceptual nature of these visual programming tasks makes them challenging, both for state-of-the-art AI agents and for novice programmers. A natural approach to providing assistance for problem-solving is breaking down a complex task into a progression of simpler subtasks; however, this is not trivial given that the solution codes are typically nested and have non-linear execution behavior. In this paper, we formalize the problem of synthesizing such a progression for a given reference block-based visual programming task. We propose a novel synthesis algorithm that generates a progression of subtasks that are high-quality and well-spaced in terms of their complexity, such that solving the progression leads to solving the reference task. We show the utility of our synthesis algorithm in improving the efficacy of AI agents (in this case, neural program synthesizers) for solving tasks in the Karel programming environment (Pattis et al., 1995). Then, we conduct a user study to demonstrate that our synthesized progression of subtasks can assist a novice programmer in solving tasks in the Hour of Code: Maze Challenge (Code.org).


A deep learning framework for epileptic seizure detection based on neonatal EEG signals - Scientific Reports

#artificialintelligence

Electroencephalogram (EEG) is one of the main diagnostic tests for epilepsy. The detection of epileptic activity is usually performed by a human expert and is based on finding specific patterns in the multi-channel electroencephalogram. This is a difficult and time-consuming task, so various attempts have been made to automate it using both conventional and Deep Learning (DL) techniques. Unfortunately, authors often do not provide sufficiently detailed and complete information to allow their results to be reproduced. Our work is intended to fill this gap. Using a carefully selected set of 79 neonatal EEG recordings, we developed a complete framework for seizure detection using a DL approach. We share ready-to-use R and Python code that allows users to: (a) read raw European Data Format (EDF) files, (b) read data files containing the seizure annotations made by human experts, (c) extract train, validation, and test data, (d) create an appropriate Convolutional Neural Network (CNN) model, (e) train the model, (f) check the quality of the neural classifier, and (g) save all learning results.
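The preprocessing step in pipelines like this (extracting train/validation/test data before feeding a CNN) typically means slicing the continuous multi-channel recording into fixed-length, overlapping windows. The sketch below is not the authors' code; it is a minimal NumPy illustration of that windowing step plus a bare-bones 1-D "valid" convolution, with the window length, step size, and sampling rate chosen as illustrative assumptions.

```python
import numpy as np

def window_eeg(signal, fs, win_sec=4, step_sec=1):
    """Split a multi-channel EEG array (channels x samples) into
    overlapping fixed-length windows, a common step before CNN training.
    win_sec/step_sec are illustrative defaults, not values from the paper."""
    win = int(win_sec * fs)
    step = int(step_sec * fs)
    n_ch, n_samp = signal.shape
    starts = range(0, n_samp - win + 1, step)
    # Result shape: (n_windows, n_channels, win)
    return np.stack([signal[:, s:s + win] for s in starts])

def conv1d_valid(x, kernel):
    """Minimal 1-D 'valid' convolution over the time axis of one channel,
    i.e. the core operation a Conv1D layer applies per filter."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])
```

In a real framework these windows would be labeled from the expert seizure annotations and passed to a CNN library rather than to a hand-rolled convolution; the point here is only the data layout each stage expects.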